Local Distance Metric Learning for Nearest Neighbor Algorithm

Authors

  • Hossein Rajabzadeh
  • Mansoor Zolghadri Jahromi
  • Mohammad Sadegh Zare
  • Mostafa Fakhrahmad
Abstract

Distance metric learning is a successful way to enhance the performance of the nearest neighbor classifier. In most cases, however, the distribution of data does not follow a regular form and may change across different parts of the feature space. To address this, this paper proposes a novel local distance metric learning method, Local Mahalanobis Distance Learning (LMDL), to enhance the performance of the nearest neighbor classifier. LMDL considers the neighborhood influence and learns multiple distance metrics for a reduced set of input samples. This reduced set, called the prototypes, is chosen to preserve as much local discriminative information as possible. The proposed LMDL can easily be kernelized, which is highly desirable in the case of strongly nonlinear data. The quality and efficiency of the proposed method are assessed through a set of experiments on various datasets, and the obtained results show that LMDL, as well as its kernelized version, is superior to other related state-of-the-art methods.
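
Since the abstract does not spell out LMDL's training objective, the following Python sketch only illustrates the general idea it describes: a reduced set of prototypes, each carrying its own Mahalanobis metric, used for nearest-prototype classification. The neighborhood-covariance rule for fitting each metric and all names below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def local_mahalanobis_sq(x, p, M):
    """Squared Mahalanobis distance (x - p)^T M (x - p)."""
    d = x - p
    return d @ M @ d

def fit_local_metrics(X, prototypes, k=10, reg=1e-3):
    """Fit one metric per prototype as the inverse regularized covariance
    of the k training points nearest to it (a stand-in for the paper's
    learning rule, which is not given in the abstract)."""
    metrics = []
    for p in prototypes:
        idx = np.argsort(np.linalg.norm(X - p, axis=1))[:k]
        cov = np.cov(X[idx].T) + reg * np.eye(X.shape[1])
        metrics.append(np.linalg.inv(cov))
    return metrics

def predict(x, prototypes, proto_labels, metrics):
    """Assign the label of the prototype closest to x under that
    prototype's own local metric."""
    dists = [local_mahalanobis_sq(x, p, M)
             for p, M in zip(prototypes, metrics)]
    return proto_labels[int(np.argmin(dists))]
```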


Related articles

Distance Metric Learning: A Comprehensive Survey

Many machine learning algorithms, such as K Nearest Neighbor (KNN), heavily rely on the distance metric for the input data patterns. Distance metric learning aims to learn a distance metric for the input space of data from a given collection of pairs of similar/dissimilar points that preserves the distance relations among the training data. In recent years, many studies have demonstrated, both empi...
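
As a concrete illustration of the pairwise setup described above (an RCA-style estimate, not a method from the survey itself), a Mahalanobis matrix can be obtained by averaging the outer products of within-pair differences and inverting, so that directions along which similar points vary are down-weighted:

```python
import numpy as np

def learn_metric_from_similar_pairs(X, similar_pairs, reg=1e-3):
    """Estimate M for the distance (x - y)^T M (x - y) from index pairs
    of points known to be similar."""
    d = X.shape[1]
    C = np.zeros((d, d))
    for i, j in similar_pairs:
        diff = (X[i] - X[j]).reshape(-1, 1)
        C += diff @ diff.T
    C = C / len(similar_pairs) + reg * np.eye(d)
    return np.linalg.inv(C)
```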

Exact Learning and Data Compression with a Local Asymmetrically Weighted Metric

This paper is concerned with a local asymmetric weighting scheme for the nearest neighbor classification algorithm and a learning procedure, based on reinforcement, for computing the weights. Theoretical results show that this context-dependent metric can learn exactly certain classes of concepts while storing fewer examples than those required by the Euclidean metric. Moreover, computer experiments ...
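
The snippet above is truncated, so the exact weighting scheme is not recoverable here. One plausible form of a local asymmetric weighting, shown purely as an assumption, gives each dimension separate weights for positive and negative differences:

```python
import numpy as np

def asymmetric_distance(x, p, w_pos, w_neg):
    """Direction-dependent weighted distance: the weight applied to each
    coordinate difference depends on its sign (illustrative only)."""
    diff = x - p
    w = np.where(diff >= 0.0, w_pos, w_neg)
    return np.sqrt(np.sum(w * diff ** 2))
```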

Bias Reduction and Metric Learning for Nearest-Neighbor Estimation of Kullback-Leibler Divergence

Asymptotically unbiased nearest-neighbor estimators for KL divergence have recently been proposed and demonstrated in a number of applications. With small sample sizes, however, these nonparametric methods typically suffer from high estimation bias due to the non-local statistics of empirical nearest-neighbor information. In this paper, we show that this non-local bias can be mitigated by chang...
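
For context, a standard k-nearest-neighbor estimator of KL(P || Q) from samples X ~ P and Y ~ Q (in the style of Wang et al.) can be sketched as below; the bias-reduction and metric corrections the paper studies are not reproduced here.

```python
import numpy as np

def knn_kl_divergence(X, Y, k=1):
    """D(P||Q) ~= (d/n) * sum_i log(nu_k(i) / rho_k(i)) + log(m / (n - 1)),
    where rho_k is the k-th NN distance within X (excluding the point
    itself) and nu_k is the k-th NN distance from x_i into Y."""
    n, d = X.shape
    m = Y.shape[0]
    est = 0.0
    for x in X:
        rho = np.sort(np.linalg.norm(X - x, axis=1))[k]   # index 0 is the point itself
        nu = np.sort(np.linalg.norm(Y - x, axis=1))[k - 1]
        est += np.log(nu / rho)
    return (d / n) * est + np.log(m / (n - 1))
```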

Multiple Closed-Form Local Metric Learning for K-Nearest Neighbor Classifier

Much research has been devoted to learning a Mahalanobis distance metric, which can effectively improve the performance of kNN classification. Most approaches are iterative and computationally expensive, and linear rigidity still critically limits the ability of metric learning algorithms to perform better. We propose a computationally economical framework to learn multiple metrics in closed form.

Flexible Metric Nearest Neighbor Classification

The K-nearest-neighbor decision rule assigns an object of unknown class to the plurality class among the K labeled "training" objects that are closest to it. Closeness is usually defined in terms of a metric distance on the Euclidean space with the input measurement variables as axes. The metric chosen to define this distance can strongly affect performance. An optimal choice depends on the proble...
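
Writing the plurality rule with the metric as an explicit parameter makes the point above concrete: swapping the distance function changes the classifier. A minimal sketch with illustrative names:

```python
import numpy as np
from collections import Counter

def knn_classify(x, X_train, y_train, k=5,
                 metric=lambda a, b: np.linalg.norm(a - b)):
    """Assign x to the plurality class among its k nearest training
    points under the given metric (Euclidean by default)."""
    dists = [metric(x, xi) for xi in X_train]
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# e.g. a diagonal (feature-weighted) metric:
# metric = lambda a, b: np.sqrt(np.sum(w * (a - b) ** 2))  # w: per-feature weights
```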


Publication date: 2018